Singular value decomposition (SVD) is one of the most popular compression methods that approximate a target matrix with smaller matrices. However, standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption. The parameters of a trained neural network model may affect task performance unevenly, suggesting non-equal importance among the parameters. Compared to standard SVD, a decomposition method that is aware of parameter importance is the more practical choice in real cases. Unlike standard SVD, weighted value decomposition is a non-convex optimization problem that lacks a closed-form solution. We systematically investigated multiple optimization strategies to tackle the problem and examined our method by compressing Transformer-based language models. Further, we designed a metric to predict when SVD may introduce a significant performance drop, for which our method can serve as a rescue strategy. Extensive evaluations demonstrate that our method can perform better than current SOTA methods in compressing Transformer-based language models.
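Since weighted low-rank factorization has no closed-form solution, one natural optimization strategy is alternating weighted least squares. Below is a minimal numpy sketch of that idea, assuming a per-parameter importance matrix `F` (e.g., Fisher information); the function and its interface are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_low_rank(W, F, r, iters=50, seed=0):
    """Approximate W (m x n) by A @ B.T minimizing sum(F * (W - A @ B.T)**2).

    F holds non-negative per-parameter importance weights (e.g., Fisher
    information). No closed form exists for general F, so we alternate
    row-wise weighted least-squares updates for A and B.
    """
    m, n = W.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, r))
    B = rng.standard_normal((n, r))
    ridge = 1e-8 * np.eye(r)  # small ridge term for numerical stability
    for _ in range(iters):
        for i in range(m):  # update row i of A under weights F[i, :]
            G = (B * F[i, :, None]).T @ B + ridge
            A[i] = np.linalg.solve(G, (B * F[i, :, None]).T @ W[i])
        for j in range(n):  # update row j of B under weights F[:, j]
            G = (A * F[:, j, None]).T @ A + ridge
            B[j] = np.linalg.solve(G, (A * F[:, j, None]).T @ W[:, j])
    return A, B

# toy usage: importance concentrated unevenly across entries
W = np.random.randn(64, 32)
F = np.random.rand(64, 32) ** 2
A, B = weighted_low_rank(W, F, r=8)
print(np.sum(F * (W - A @ B.T) ** 2))  # weighted reconstruction error
```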
Learning-based visual odometry (VO) algorithms achieve remarkable performance on common static scenes, benefiting from high-capacity models and massive annotated data, but tend to fail in dynamic, populated environments. Semantic segmentation is largely employed to discard dynamic associations before estimating camera motion, but at the cost of discarding static features, and it is hard to scale up to unseen categories. In this paper, we leverage the mutual dependence between camera ego-motion and motion segmentation, and show that both can be jointly refined in a single learning-based framework. In particular, we present DytanVO, the first learning-based VO method that deals with dynamic environments. It takes two consecutive monocular frames in real time and predicts the camera's ego-motion in an iterative fashion. Our method achieves an average improvement of 27.7% over state-of-the-art VO solutions in real-world dynamic environments, and is even competitive with dynamic visual SLAM systems that optimize the trajectory in the backend. Experiments on plentiful unseen environments also demonstrate our method's generalizability.
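The iterative refinement at the core of such a method can be sketched as a loop that alternates ego-motion estimation and motion segmentation until the pose stabilizes. All module interfaces below (matcher, motion_net, seg_net) are hypothetical placeholders for learned components, not DytanVO's actual API.

```python
import numpy as np

def estimate_vo(frame_t, frame_t1, matcher, motion_net, seg_net,
                max_iters=3, tol=1e-3):
    """Illustrative sketch of joint ego-motion / segmentation refinement:
    the two quantities are interdependent, so we iterate between them.
    """
    flow = matcher(frame_t, frame_t1)          # dense optical flow
    mask = np.zeros(flow.shape[:2], bool)      # start: nothing is dynamic
    prev_pose = None
    for _ in range(max_iters):
        # 1) estimate camera ego-motion from flow, ignoring dynamic pixels
        pose = motion_net(flow, static_mask=~mask)
        # 2) re-segment moving objects given the current ego-motion:
        #    pixels whose flow disagrees with the rigid motion are dynamic
        mask = seg_net(flow, pose)
        if prev_pose is not None and np.linalg.norm(pose - prev_pose) < tol:
            break                              # ego-motion has converged
        prev_pose = pose
    return pose, mask
```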
Factorizing a large matrix into small matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this compression strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error toward reconstructing the original matrix without gauging the importance of the parameters, potentially giving a larger reconstruction error to those parameters that affect the task accuracy more. In other words, the optimization objective of SVD is not aligned with the task accuracy of the trained model. We analyze this previously unexplored problem, make observations, and address it by introducing Fisher information to weigh the importance of parameters affecting the model prediction. This idea leads to our method: Fisher-Weighted SVD (FWSVD). Although the factorized matrices from our approach do not have smaller reconstruction errors, we find that the resulting task accuracy is much closer to the original model's performance. We perform analysis with Transformer-based language models, showing our weighted SVD largely alleviates the mismatched optimization objectives and can maintain model performance at higher compression rates. Our method can directly compress a task-specific model while achieving better performance than other compact model strategies that require expensive model pre-training. Moreover, the evaluation of the compressed models reveals that our method can further reduce 9% to 30% of parameters with an insignificant impact on task accuracy.
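When every entry in a row shares a single importance weight, the weighted problem reduces to a standard SVD of a row-rescaled matrix. A small numpy sketch of that reduction, under the assumption (a common simplification in this line of work) that per-row importance is obtained by summing Fisher information across each row:

```python
import numpy as np

def fwsvd(W, fisher, r):
    """Fisher-weighted SVD sketch: sum the per-parameter Fisher values
    over each row, rescale the rows of W by the square root of that
    importance, run plain SVD, then undo the scaling on the left factor.
    """
    row_w = np.sqrt(fisher.sum(axis=1)) + 1e-8       # per-row importance
    U, s, Vt = np.linalg.svd(row_w[:, None] * W, full_matrices=False)
    A = (U[:, :r] * s[:r]) / row_w[:, None]          # m x r left factor
    B = Vt[:r]                                       # r x n right factor
    return A, B                                      # W ~= A @ B

# toy usage with a random "Fisher" map
W = np.random.randn(128, 64)
fisher = np.random.rand(128, 64)
A, B = fwsvd(W, fisher, r=16)
```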
Domain classification is a fundamental task in natural language understanding (NLU), and it often requires fast accommodation of new, emerging domains. This constraint makes it infeasible to retrain on all previous domains, even when the new model is accessible. Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different. In fact, the key real-world problem is not the absence of old data, but the inefficiency of retraining the model with the whole old dataset. Is it possible to utilize some old data to yield high accuracy and maintain stable performance, while not introducing extra hyperparameters? In this paper, we propose a hyperparameter-free continual learning model for text data that can stably produce high performance under various environments. Specifically, we utilize Fisher information to select examples that can "record" the key information of the original model. Also, a novel scheme called dynamic weight consolidation is proposed to enable hyperparameter-free learning during the retraining process. Extensive experiments demonstrate that baselines suffer from fluctuating performance and are therefore useless in practice. On the contrary, our proposed CCFI significantly and consistently outperforms the best state-of-the-art method by up to 20% in average accuracy, and each component of CCFI contributes effectively to the overall performance.
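A common way to score examples by Fisher information is the squared norm of the per-example gradient (the diagonal empirical Fisher). The PyTorch sketch below illustrates exemplar selection along these lines; CCFI's exact criterion may differ.

```python
import torch

def fisher_scores(model, loss_fn, examples):
    """Score each old-data example by its empirical Fisher information,
    approximated as the squared norm of the per-example gradient.
    """
    scores = []
    for x, y in examples:
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        g2 = sum((p.grad ** 2).sum().item()
                 for p in model.parameters() if p.grad is not None)
        scores.append(g2)
    return scores

def select_exemplars(model, loss_fn, examples, k):
    # keep the k examples that carry the most Fisher information
    scores = fisher_scores(model, loss_fn, examples)
    order = sorted(range(len(examples)), key=lambda i: scores[i], reverse=True)
    return [examples[i] for i in order[:k]]
```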
Pre-trained language models such as BERT have shown remarkable effectiveness in various natural language processing tasks. However, these models usually contain millions of parameters, which prevents their practical deployment on resource-constrained devices. Knowledge distillation, weight pruning, and quantization are known to be the main directions of model compression. However, compact models obtained through knowledge distillation may suffer from a significant accuracy drop even for relatively small compression ratios. On the other hand, only a few quantization attempts are specifically designed for natural language processing tasks. They suffer from small compression ratios or large error rates, since manual setting of hyperparameters is required and fine-grained subgroup-wise quantization is not supported. In this paper, we propose an automatic mixed-precision quantization framework designed for BERT that can conduct quantization and pruning simultaneously at a subgroup-wise level. Specifically, our proposed method leverages differentiable neural architecture search to automatically assign the scale and precision of the parameters in each subgroup, while pruning out redundant groups of parameters. Extensive evaluations on BERT downstream tasks reveal that our proposed method outperforms baselines, providing the same performance at a smaller model size. We also demonstrate the feasibility of obtaining extremely lightweight models by combining our solution with orthogonal methods such as DistilBERT.
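A DNAS-style relaxation of bit-width selection can be sketched as a softmax mixture over quantized variants of each parameter subgroup, with a 0-bit candidate acting as pruning. This is an illustrative construction of the general technique, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MixedPrecisionGroup(nn.Module):
    """Each parameter subgroup holds architecture logits over candidate
    bit-widths; a softmax mixes the quantized variants so the precision
    choice is differentiable. The 0-bit candidate zeroes the group,
    i.e., prunes it.
    """
    def __init__(self, weight, bits=(0, 2, 4, 8)):
        super().__init__()
        self.weight = nn.Parameter(weight)
        self.bits = bits
        self.alpha = nn.Parameter(torch.zeros(len(bits)))  # arch logits

    def quantize(self, w, b):
        if b == 0:
            return torch.zeros_like(w)           # pruned group
        scale = w.abs().max() / (2 ** (b - 1) - 1)
        q = torch.round(w / scale).clamp(-(2 ** (b - 1)), 2 ** (b - 1) - 1)
        return q * scale                         # fake-quantized weights

    def forward(self):
        probs = torch.softmax(self.alpha, dim=0)
        # differentiable (w.r.t. alpha) mixture of quantization choices;
        # a straight-through estimator would be added for weight gradients
        return sum(p * self.quantize(self.weight, b)
                   for p, b in zip(probs, self.bits))
```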
Going beyond testing on in-distribution data, out-of-distribution (OOD) detection has recently gained popularity. Recent attempts to categorize OOD data introduce the concepts of near and far OOD detection; specifically, prior works define the characteristics of OOD data in terms of detection difficulty. We propose to characterize the spectrum of OOD data using two types of distribution shift: covariate shift and concept shift, where covariate shift corresponds to changes in style, e.g., noise, and concept shift indicates changes in semantics. This characterization reveals that sensitivity to each type of shift is important for both the detection and the confidence calibration of OOD data. Consequently, we investigate score functions that capture sensitivity to each type of dataset shift and methods to improve them. To this end, we theoretically derive two score functions for OOD detection, a covariate shift score and a concept shift score, based on a decomposition of the KL-divergence, and propose a geometrically-inspired method (Geometric ODIN) to improve OOD detection under both shifts using only in-distribution data. Additionally, the proposed method naturally leads to an expressive post-hoc calibration function that yields state-of-the-art calibration performance on both in-distribution and out-of-distribution data. We are the first to propose a method that works well across both detection and calibration, and under different types of shifts. See the project page at https://sites.google.com/view/geometric-decomposition.
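A hedged sketch of the geometric intuition: the norm of the penultimate feature tracks covariate (style) shift, while the angle to the closest class weight direction tracks concept (semantic) shift. The paper derives its two scores from a KL-divergence decomposition, so the exact functions may differ from this simplification.

```python
import numpy as np

def geometric_scores(z, W):
    """Norm/angle decomposition in the spirit of Geometric ODIN.

    z: (d,) penultimate-layer feature;  W: (num_classes, d) classifier weights.
    Returns a covariate shift score (feature norm) and a concept shift
    score (best cosine similarity to a class direction).
    """
    z_norm = np.linalg.norm(z)                        # covariate shift score
    cos = (W @ z) / (np.linalg.norm(W, axis=1) * z_norm + 1e-12)
    concept_score = cos.max()                         # concept shift score
    return z_norm, concept_score
```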
Deep neural networks have attained remarkable performance when applied to data that comes from the same distribution as that of the training set, but can significantly degrade otherwise. Therefore, detecting whether an example is out-of-distribution (OoD) is crucial to enable a system that can reject such samples or alert users. Recent works have made significant progress on OoD benchmarks consisting of small image datasets. However, many recent methods based on neural networks rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a priori, and its selection can easily bias the learning. We base our work on a popular method, ODIN [21], proposing two strategies for freeing it from the need to tune with OoD data while improving its OoD detection performance: decomposed confidence scoring and a modified input pre-processing method. We show that both significantly help detection performance. Our further analysis on a larger-scale image dataset shows that the two types of distribution shift, namely semantic shift and non-semantic shift, present a significant difference in the difficulty of the problem, providing an analysis of when ODIN-like strategies do or do not work.
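For reference, the mechanics of the ODIN baseline that this work builds on can be sketched as temperature scaling plus a confidence-increasing input perturbation; the decomposed-confidence variant replaces the fixed temperature with a learned split of the logits and chooses the perturbation magnitude without any OoD data. Below is a PyTorch sketch of the shared mechanics only, using a smooth logsumexp proxy for the max-softmax gradient (our implementation choice, not the paper's).

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, eps=0.002, T=1000.0):
    """ODIN-style OoD score: perturb the input toward higher confidence,
    then return the temperature-scaled max softmax probability.
    Higher values indicate in-distribution.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # smooth, differentiable proxy for the max-softmax confidence
    score = (logits / T).logsumexp(dim=1).sum()
    score.backward()
    # nudge the input in the direction that increases confidence
    x_pert = (x + eps * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values
```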
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation, in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
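The difference between the two attacks can be sketched as where the trigger enters the distillation loop: stamped once up front (NAIVEATTACK) versus re-optimized at every iteration (DOORPING). The callables and interfaces below are hypothetical, not the paper's code.

```python
import torch

def stamp(images, trigger):
    """Paste a small trigger patch into the bottom-right corner."""
    images = images.clone()
    th, tw = trigger.shape[-2:]
    images[..., -th:, -tw:] = trigger
    return images

def distill_with_backdoor(raw_images, synthetic, distill_step,
                          trigger, trigger_step=None,
                          steps=1000, poison_frac=0.1):
    """With trigger_step=None this is NAIVEATTACK-like: the trigger is
    stamped onto raw data once per batch and distillation proceeds as
    usual. Supplying a trigger_step callable makes it DOORPING-like:
    the trigger is re-optimized at every distillation iteration so that
    it survives into the synthetic set.
    """
    n = int(len(raw_images) * poison_frac)
    for _ in range(steps):
        if trigger_step is not None:
            trigger = trigger_step(trigger, synthetic)   # DOORPING update
        batch = torch.cat([stamp(raw_images[:n], trigger),
                           raw_images[n:]])
        synthetic = distill_step(synthetic, batch)       # one distillation step
    return synthetic, trigger
```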
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
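The feature-level step can be sketched as masked average pooling of support features into dynamic class centers, whose similarity to each query location re-weights the query feature map. The tensor shapes and weighting form below are illustrative assumptions, not RefT's exact module.

```python
import torch
import torch.nn.functional as F

def reweight_query(support_feats, support_masks, query_feats):
    """Masked average pooling builds a class center from the supports;
    cosine similarity to that center re-weights the query features.

    support_feats: (K, C, H, W); support_masks: (K, 1, H, W) in {0, 1};
    query_feats:   (C, H, W) for a single query image.
    """
    # masked average pooling -> one center per support image: (K, C)
    denom = support_masks.sum(dim=(2, 3)).clamp(min=1.0)
    centers = (support_feats * support_masks).sum(dim=(2, 3)) / denom
    center = centers.mean(dim=0)                        # (C,) class center
    sim = F.cosine_similarity(query_feats,              # (H, W) relevance map
                              center[:, None, None], dim=0)
    return query_feats * (1 + sim).unsqueeze(0)         # re-weighted features
```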
Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention feature during the generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise follows the time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
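The graph-based event encoder can be sketched as connecting events by content similarity and propagating information over the resulting graph; UTS's encoder is learned end-to-end, so the sketch below only illustrates the structure, with the threshold and propagation rule as assumptions.

```python
import torch
import torch.nn.functional as F

def event_graph_encode(event_embs, sim_threshold=0.5, hops=2):
    """Relate events by content similarity, then propagate over the
    graph so each event representation becomes globally aware.

    event_embs: (N, D), one embedding per event, in chronological order.
    """
    sim = F.cosine_similarity(event_embs.unsqueeze(1),
                              event_embs.unsqueeze(0), dim=-1)
    adj = (sim > sim_threshold).float()
    adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # row-normalize
    h = event_embs
    for _ in range(hops):  # simple propagation; a learned GNN layer in practice
        h = 0.5 * h + 0.5 * adj @ h
    return h
```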